254 research outputs found

    Errors in the administration of intravenous medications in hospital and the role of correct procedures and nurse experience

    Background: Intravenous medication administrations have a high incidence of error but there is limited evidence of associated factors or error severity. Objective: To measure the frequency, type and severity of intravenous administration errors in hospitals and the associations between errors, procedural failures and nurse experience. Methods: Prospective observational study of 107 nurses preparing and administering 568 intravenous medications on six wards across two teaching hospitals. Procedural failures (eg, checking patient identification) and clinical intravenous errors (eg, wrong intravenous administration rate) were identified and categorised by severity. Results: Of 568 intravenous administrations, 69.7% (n=396; 95% CI 65.9 to 73.5) had at least one clinical error and 25.5% (95% CI 21.2 to 29.8) of these were serious. Four error types (wrong intravenous rate, mixture, volume, and drug incompatibility) accounted for 91.7% of errors. Wrong rate was the most frequent and accounted for 95 of 101 serious errors. Error rates and severity decreased with clinical experience. Each year of experience, up to 6 years, reduced the risk of error by 10.9% and serious error by 18.5%. Administration by bolus was associated with a 312% increased risk of error. Patient identification was only checked in 47.9% of administrations but was associated with a 56% reduction in intravenous error risk. Conclusions: Intravenous administrations have a higher risk and severity of error than other medication administrations. A significant proportion of errors suggest skill and knowledge deficiencies, with errors and severity reducing as clinical experience increases. A proportion of errors are also associated with routine violations which are likely to be learnt workplace behaviours. Both areas suggest specific targets for intervention.
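
    The per-year effect sizes reported above lend themselves to a quick worked illustration. The sketch below assumes the reported reductions act multiplicatively per additional year of experience, capped at 6 years; the abstract does not state the model form, so the function and all figures beyond those quoted are illustrative only.

```python
# Illustrative sketch only. The study reports that each year of experience,
# up to 6 years, reduced the risk of any IV error by 10.9% and of serious
# error by 18.5%. Assumption: the per-year effect is multiplicative.

def relative_risk(years_experience: float, per_year_reduction: float,
                  cap_years: float = 6.0) -> float:
    """Risk relative to a nurse with no experience, with the effect capped at cap_years."""
    effective_years = min(years_experience, cap_years)
    return (1.0 - per_year_reduction) ** effective_years

if __name__ == "__main__":
    for years in (1, 3, 6, 10):
        any_err = relative_risk(years, 0.109)   # any clinical IV error
        serious = relative_risk(years, 0.185)   # serious IV error
        print(f"{years:>2} years: RR(any)={any_err:.2f}, RR(serious)={serious:.2f}")
```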

    The safety implications of missed test results for hospitalised patients: a systematic review

    Background: Failure to follow up test results is a critical safety issue. The objective was to systematically review evidence quantifying the extent of failure to follow up test results and the impact on patient outcomes. Methods: The authors searched Medline, CINAHL, Embase, Inspec and the Cochrane Database from 1990 to March 2010 for English-language articles which quantified the proportion of diagnostic tests not followed up for hospital patients. Four reviewers independently reviewed titles, abstracts and articles for inclusion. Results: Twelve studies met the inclusion criteria and demonstrated a wide variation in the extent of the problem and the impact on patient outcomes. A lack of follow-up of test results for inpatients ranged from 20.04% to 61.6% and for patients treated in the emergency department ranged from 1.0% to 75% when calculated as a proportion of tests. Two areas where problems were particularly evident were critical test results and results for patients moving across healthcare settings. Systems used to manage follow-up of test results were varied and included paper-based, electronic and hybrid paper-and-electronic systems. Evidence of the effectiveness of electronic test management systems was limited. Conclusions: Failure to follow up test results for hospital patients is a substantial problem. Evidence of the negative impacts for patients when important results are not actioned, matched with advances in the functionality of clinical information systems, presents a convincing case for the need to explore solutions. These should include interventions such as on-line endorsement of results.

    The impact of PACS on clinician work practices in the intensive care unit: a systematic review of the literature

    Objective To assess evidence of the impact of Picture Archiving and Communication Systems (PACS) on clinicians' work practices in the intensive care unit (ICU). Methods We searched Medline, Pre-Medline, CINAHL, Embase, and the SPIE Digital Library databases for English-language publications between 1980 and September 2010 using Medical Subject Headings terms and keywords. Results Eleven studies from the USA and UK were included. All studies measured aspects of time associated with the introduction of PACS, namely the availability of images, the time a physician took to review an image, and changes in viewing patterns. Seven studies examined the impact on clinical decision-making, with the majority measuring the time to image-based clinical action. The effect of PACS on communication modes was reported in five studies. Discussion PACS can impact on clinician work practices in three main areas. Most of the evidence suggests an improvement in the efficiency of work practices. Quick image availability can impact on work associated with clinical decision-making, although the results were inconsistent. PACS can change communication practices, particularly between the ICU and radiology; however, the evidence base is insufficient to draw firm conclusions in this area. Conclusion The potential for PACS to impact positively on clinician work practices in the ICU and improve patient care is great. However, the evidence base is limited and does not reflect aspects of contemporary PACS technology. Performance measures developed in previous studies remain relevant, with much left to investigate to understand how PACS can support new and improved ways of delivering care in the intensive care setting.

    Health professional networks as a vector for improving healthcare quality and safety: a systematic review

    Background: While there is a considerable corpus of theoretical and empirical literature on networks within and outside of the health sector, multiple research questions are yet to be answered. Objective: To conduct a systematic review of studies of professionals' network structures, identifying factors associated with network effectiveness and sustainability, particularly in relation to quality of care and patient safety. Methods: The authors searched MEDLINE, CINAHL, EMBASE, Web of Science and Business Source Premier from January 1995 to December 2009. Results: A majority of the 26 unique studies identified used social network analysis to examine structural relationships: within and between networks, between health professionals and their social context, in health collaboratives and partnerships, and in knowledge-sharing networks. Key aspects of networks explored were administrative and clinical exchanges, network performance, integration, stability and influences on the quality of healthcare. More recent studies show that cohesive and collaborative health professional networks can facilitate the coordination of care and contribute to improving quality and safety of care. Structural network vulnerabilities include cliques, professional and gender homophily, and over-reliance on central agencies or individuals. Conclusions: Effective professional networks employ natural structural network features (eg, bridges, brokers, density, centrality, degrees of separation, social capital, trust) in producing collaboratively oriented healthcare. This requires efficient transmission of information and social and professional interaction within and across networks. For those using networks to improve care, recurring success factors are understanding your network's characteristics, attending to its functioning and investing time in facilitating its improvement. Despite this, there is no guarantee that time spent on networks will necessarily improve patient care.
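
    As a concrete illustration of the structural features named in the conclusions (density, centrality, brokerage, degrees of separation), the minimal sketch below applies standard social network analysis metrics, via the Python networkx library, to a small and entirely hypothetical clinician network; it is not data from the review.

```python
# Minimal sketch of common social network analysis metrics on a hypothetical
# clinician network; the nodes and ties below are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("GP_A", "Cardiologist"), ("GP_B", "Cardiologist"),
    ("Cardiologist", "NurseUnitManager"), ("NurseUnitManager", "Pharmacist"),
    ("Pharmacist", "GP_A"), ("NurseUnitManager", "Physiotherapist"),
])

print("Density:", round(nx.density(G), 2))                        # share of possible ties that exist
print("Degree centrality:", nx.degree_centrality(G))              # who holds the most direct ties
print("Betweenness (brokerage):", nx.betweenness_centrality(G))   # who bridges otherwise separate groups
print("Mean degrees of separation:", round(nx.average_shortest_path_length(G), 2))
```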

    Social interactions and quality of life of residents in aged care facilities : a multi-methods study

    Background. The relationship between social contact and quality of life is well-established within the general population. However, limited data exist about the extent of social interactions in residential aged care facilities (RACFs) providing long-term accommodation and care. We aimed to record the frequency and duration of interpersonal interactions among residents in RACFs and identify the association between residents' interpersonal interactions and quality of life (QoL). Materials and methods. A multi-methods study, including time and motion observations and a QoL survey, was conducted between September 2019 and January 2020. Thirty-nine residents from six Australian RACFs were observed between 09:30 and 17:30 on weekdays. Observations included residents' actions, location of the action, and who the resident was with during the action. At the end of the observation period, residents completed a QoL survey. The proportion of time residents spent on different actions, in which location, and with whom were calculated, and correlations between these factors and QoL were analysed. Results. A total of 312 hours of observations were conducted. Residents spent the greatest proportion of time in their own room (45.2%, 95% CI 40.7-49.8), alone (47.9%, 95% CI 43.0-52.7) and being inactive (25.6%, 95% CI 22.5-28.7). Residents were also largely engaged in interpersonal communication (20.2%, 95% CI 17.9-22.5) and self-initiated or scheduled events (20.5%, 95% CI 18.0-23.0). Residents' interpersonal communication was most likely to occur in the common area (29.3%, 95% CI 22.9-35.7), residents' own room (26.7%, 95% CI 21.0-32.4) or the dining room (24.6%, 95% CI 18.9-30.2), and was most likely with another resident (54.8%, 95% CI 45.7-64.2). Quality of life scores were low (median = 0.68, IQR = 0.54-0.76). The amount of time spent with other residents was positively correlated with QoL (r = 0.39, p = 0.02), whilst the amount of time spent with facility staff was negatively correlated with QoL (r = -0.45, p = 0.008). Discussion and conclusions. Our findings confirm an established association between social interactions and improved QoL. Opportunities and activities which encourage residents to engage throughout the day in common facility areas can support resident wellbeing.
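
    The two analytic steps described above can be sketched briefly: estimating the proportion of observed time spent in a category with a 95% CI, and correlating time spent with other residents against QoL. The normal-approximation interval and Pearson correlation below are assumptions (the study may have used cluster-adjusted or rank-based methods), and all numbers are invented.

```python
# Sketch only; the interval and correlation methods are assumptions and the
# data below are invented, not taken from the study.
import numpy as np
from scipy import stats

def proportion_ci(events: int, n_observations: int, z: float = 1.96):
    """Proportion of observation intervals in a category, with a normal-approximation 95% CI."""
    p = events / n_observations
    se = np.sqrt(p * (1 - p) / n_observations)
    return p, p - z * se, p + z * se

p, lo, hi = proportion_ci(events=565, n_observations=1248)  # e.g. intervals spent in own room
print(f"Proportion of time: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Hypothetical per-resident data: hours observed with other residents vs QoL score
hours_with_residents = np.array([0.5, 1.2, 2.0, 0.1, 3.4, 1.8])
qol_scores = np.array([0.55, 0.62, 0.71, 0.50, 0.80, 0.68])
r, p_value = stats.pearsonr(hours_with_residents, qol_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```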

    Quality of life measurement in community-based aged care : understanding variation between clients and between care service providers

    Background: Measuring person-centred outcomes and using this information to improve service delivery is a challenge for many care providers. We aimed to identify predictors of QoL among older adults receiving community-based aged care services and examine variation across different community care service outlets. Methods: A retrospective sample of 1141 Australians aged ≥60 years receiving community-based care services from a large service provider within 19 service outlets. Clients' QoL was captured using the ICEpop CAPability Index. QoL scores and predictors of QoL (i.e. sociodemographic, social participation and service use) were extracted from clients' electronic records and examined using multivariable regression. Funnel plots were used to examine variation in risk-adjusted QoL scores across service outlets. Results: Mean age was 81.5 years (SD = 8) and 75.5% were women. Clients had a mean QoL score of 0.81 (range 0–1, SD = 0.15). After accounting for other factors, being older (p < 0.01), having lower-level care needs (p < 0.01), receiving services which met needs for assistance with activities of daily living (p < 0.01), and having higher levels of social participation (p < 0.001) were associated with higher QoL scores. Of the 19 service outlets, 21% (n = 4) had lower mean risk-adjusted QoL scores than expected (outside the 95% control limits) and 16% (n = 3) had higher mean scores than expected. Conclusion: Using QoL as an indicator to compare care quality may be feasible, with appropriate risk adjustment. Implementing QoL tools allows providers to measure and monitor their performance and service outcomes, as well as identify clients with poor quality of life who may need extra support.
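
    The funnel-plot comparison described in the methods can be sketched as follows: each outlet's mean risk-adjusted QoL score is compared against 95% control limits around the overall mean, with the limits narrowing as outlet size grows. The limit formula (mean ± 1.96 × SD / √n) is a standard choice for a continuous indicator and is an assumption here; the outlet data are invented.

```python
# Sketch of funnel-plot control limits for a continuous QoL indicator.
# Formula and outlet data are illustrative assumptions, not the study's own.
import math

overall_mean = 0.81   # overall mean QoL score reported above
overall_sd = 0.15     # overall SD reported above

# Hypothetical outlets: (name, number of clients, mean risk-adjusted QoL)
outlets = [("Outlet A", 40, 0.74), ("Outlet B", 120, 0.83), ("Outlet C", 65, 0.88)]

for name, n, mean_qol in outlets:
    half_width = 1.96 * overall_sd / math.sqrt(n)
    lower, upper = overall_mean - half_width, overall_mean + half_width
    if mean_qol < lower:
        flag = "lower than expected"
    elif mean_qol > upper:
        flag = "higher than expected"
    else:
        flag = "within limits"
    print(f"{name}: mean = {mean_qol:.2f}, 95% limits = ({lower:.2f}, {upper:.2f}) -> {flag}")
```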

    Systematic review of 29 self-report instruments for assessing quality of life in older adults receiving aged care services

    Background: Quality of life (QoL) outcomes are used to monitor quality of care for older adults accessing aged care services, yet it remains unclear which QoL instruments best meet older adults', providers' and policymakers' needs. This review aimed to (1) identify QoL instruments used in aged care and describe them in terms of QoL domains measured and logistical details; (2) summarise in which aged care settings the instruments have been used and (3) discuss factors to consider in deciding on the suitability of QoL instruments for use in aged care services. Design: Systematic review. Data sources: MEDLINE, EMBASE, PsycINFO, Cochrane Library and CINAHL from inception to 2021. Eligibility criteria: Instruments were included if they were designed for adults (>18 years), available in English, had been applied in a peer-reviewed research study examining QoL outcomes in adults >65 years accessing aged care (including home/social care and residential/long-term care) and had reported psychometrics. Data extraction and synthesis: Two researchers independently reviewed the measures and extracted the data. Data synthesis was performed via narrative review of eligible instruments. Results: 292 articles reporting on 29 QoL instruments were included. Eight domains of QoL were addressed: physical health, mental health, emotional state, social connection, environment, autonomy and overall QoL. The period between 1990 and 2000 produced the greatest number of newly developed instruments. The EuroQoL-5 Dimensions (EQ-5D) and Short Form series were used across multiple aged care contexts including home and residential care. More recent instruments (eg, the ICEpop CAPability measure for Older people (ICECAP-O) and the Adult Social Care Outcomes Toolkit (ASCOT)) tend to capture emotional sentiment towards personal circumstances and higher-order care needs, in comparison with more established instruments (eg, EQ-5D), which are largely focused on health status. Conclusions: A comprehensive list of QoL instruments and their characteristics is provided to inform instrument choice for use in research or for care quality assurance in aged care settings, depending on the needs and interests of users.

    Drivers of unprofessional behaviour between staff in acute care hospitals: a realist review

    Background: Unprofessional behaviours (UB) between healthcare staff are rife in global healthcare systems, negatively impacting staff wellbeing, patient safety and care quality. Drivers of UBs include organisational, situational, team, and leadership issues which interact in complex ways. An improved understanding of these factors and their interactions would enable future interventions to better target these drivers of UB. Methods: A realist review following RAMESES guidelines was undertaken with stakeholder input. Initial theories were formulated drawing on reports known to the study team and scoping searches. A systematic search of databases including Embase, CINAHL, MEDLINE and HMIC was performed to identify literature for theory refinement. Data were extracted from these reports, synthesised, and initial theories tested, to produce refined programme theories. Results: We included 81 reports (papers) from 2,977 deduplicated records of grey and academic reports, and 28 via Google, stakeholders, and team members, yielding a total of 109 reports. Five categories of contributor were formulated: (1) workplace disempowerment; (2) harmful workplace processes and cultures; (3) inhibited social cohesion; (4) reduced ability to speak up; and (5) lack of manager awareness and urgency. These resulted in direct increases to UB, reduced ability of staff to cope, and reduced ability to report, challenge or address UB. Twenty-three theories were developed to explain how these contributors work and interact, and how their outcomes differ across diverse staff groups. Staff most at risk of UB include women, new staff, staff with disabilities, and staff from minoritised groups. UB negatively impacted patient safety by impairing concentration, communication, ability to learn, confidence, and interpersonal trust. Conclusion: Existing research has focused primarily on individual characteristics, but these are inconsistent, difficult to address, and can be used to deflect organisational responsibility. We present a comprehensive programme theory furthering understanding of contributors to UB, how they work and why, how they interact, whom they affect, and how patient safety is impacted. More research is needed to understand how and why minoritised staff are disproportionately affected by UB. Study registration: This study was registered on the international database of prospectively registered systematic reviews in health and social care (PROSPERO): https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021255490

    The use and predictive performance of the Peninsula Health Falls Risk Assessment Tool (PH-FRAT) in 25 residential aged care facilities : a retrospective cohort study using routinely collected data

    Background: The Peninsula Health Falls Risk Assessment Tool (PH-FRAT) is a validated and widely applied tool in residential aged care facilities (RACFs) in Australia. However, research regarding its use and predictive performance is limited. This study aimed to determine the use and performance of the PH-FRAT in predicting falls in RACF residents. Methods: A retrospective cohort study using routinely collected data from 25 RACFs in metropolitan Sydney, Australia, from July 2014 to December 2019. A total of 5888 residents aged ≥65 years who were assessed at least once using the PH-FRAT were included in the study. The PH-FRAT risk score ranges from 5 to 20, with a score >14 indicating fallers and ≤14 non-fallers. The predictive performance of the PH-FRAT was determined using metrics including the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, sensitivity event rate (sensitivityER) and specificity event rate (specificityER). Results: A total of 27,696 falls were reported over 3,689,561 resident days (a crude incidence rate of 7.5 falls per 1000 resident days). A total of 38,931 PH-FRAT assessments were conducted, with a median of 4 assessments per resident, a median of 43.8 days between assessments, and an overall median fall risk score of 14. Residents with multiple assessments had increased risk scores over time. The baseline PH-FRAT demonstrated a low AUROC of 0.57, sensitivity of 26.0% (sensitivityER 33.6%) and specificity of 88.8% (specificityER 82.0%). The follow-up PH-FRAT assessments increased sensitivityER values although the specificityER decreased. The performance of the PH-FRAT improved using a lower risk score cut-off of 10, with an AUROC of 0.61, sensitivity of 67.5% (sensitivityER 74.4%) and specificity of 55.2% (specificityER 45.6%). Conclusions: Although the PH-FRAT is frequently used in RACFs, it demonstrated poor predictive performance, raising concerns about its value. Introducing a lower PH-FRAT cut-off score of 10 marginally enhanced its predictive performance. Future research should focus on understanding the feasibility and accuracy of dynamic fall risk predictive tools, which may serve to better identify residents at risk of falls.
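
    To make the threshold analysis concrete, the sketch below shows how AUROC, sensitivity and specificity are derived from risk scores and observed falls at the two cut-offs discussed (>14 and >10). The scores and outcomes are simulated, and the event-rate-weighted variants (sensitivityER, specificityER) used in the study are not reproduced here.

```python
# Sketch only: simulated PH-FRAT scores and fall outcomes, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores = rng.integers(5, 21, size=200)                      # PH-FRAT scores range from 5 to 20
fell = (rng.random(200) < (scores - 4) / 40).astype(int)    # fall risk loosely increasing with score

print("AUROC:", round(roc_auc_score(fell, scores), 2))

for cutoff in (14, 10):                                     # >14 is the standard cut-off; >10 the lower one
    predicted_faller = scores > cutoff
    tp = np.sum(predicted_faller & (fell == 1))
    fn = np.sum(~predicted_faller & (fell == 1))
    tn = np.sum(~predicted_faller & (fell == 0))
    fp = np.sum(predicted_faller & (fell == 0))
    print(f"cut-off >{cutoff}: sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```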